25 research outputs found

    Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples

    This paper presents a novel power spectral density estimation technique for band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and incorporates the advantages of compressed sensing (CS) when the power spectrum is sparse, but applies to sparse and nonsparse power spectra alike. The estimates are consistent piecewise constant approximations whose resolutions (width of the piecewise constant segments) are controlled by the periodicity of the multi-coset sampling. We show that compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. For suitable sampling patterns, noncompressive estimates are obtained as least squares solutions. Because of the non-negativity of power spectra, compressive estimates can be computed by seeking non-negative least squares solutions (provided appropriate sampling patterns exist) instead of using standard CS recovery algorithms. This flexibility suggests a reduction in computational overhead for systems estimating both sparse and nonsparse power spectra, because one algorithm can be used to compute both compressive and noncompressive estimates. Comment: 26 pages, single spaced, 9 figures
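    As the abstract notes, non-negativity lets a single non-negative least squares solver handle both the sparse and nonsparse cases. A minimal sketch of that idea, with a random matrix standing in for the paper's multi-coset correlation map (the sizes, sparsity level, and measurement model are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup: the unknown power spectrum s (non-negative, piecewise
# constant over K bins) is observed through a linear map A. Here A is
# a random stand-in for the multi-coset correlation matrix; K, M and
# the sparsity level are illustrative assumptions.
rng = np.random.default_rng(0)
K, M = 64, 40                     # spectral bins, sub-Nyquist measurements
s_true = np.zeros(K)
s_true[rng.choice(K, size=4, replace=False)] = rng.uniform(1.0, 3.0, size=4)

A = rng.standard_normal((M, K))   # stand-in sampling-pattern matrix
y = A @ s_true                    # noiseless correlation measurements

# Non-negativity alone lets plain NNLS recover the sparse spectrum,
# with no dedicated CS recovery algorithm.
s_hat, residual = nnls(A, y)
print(residual, np.max(np.abs(s_hat - s_true)))
```

    With a suitable sampling pattern, the same `nnls` call would also produce the noncompressive (overdetermined least squares) estimate, which is the computational flexibility the abstract describes.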

    Reconciling Compressive Sampling Systems for Spectrally-sparse Continuous-time Signals

    The Random Demodulator (RD) and the Modulated Wideband Converter (MWC) are two recently proposed compressed sensing (CS) techniques for the acquisition of continuous-time spectrally-sparse signals. They extend the standard CS paradigm from sampling discrete, finite dimensional signals to sampling continuous and possibly infinite dimensional ones, and thus establish the ability to capture these signals at sub-Nyquist sampling rates. The RD and the MWC have remarkably similar structures (similar block diagrams), but their reconstruction algorithms and signal models strongly differ. To date, few results exist that compare these systems, and owing to the potential impacts they could have on spectral estimation in applications like electromagnetic scanning and cognitive radio, we more fully investigate their relationship in this paper. We show that the RD and the MWC are both based on the general concept of random filtering, but employ significantly different sampling functions. We also investigate system sensitivities (or robustness) to sparse signal model assumptions. Lastly, we show that "block convolution" is a fundamental aspect of the MWC, allowing it to successfully sample and reconstruct block-sparse (multiband) signals. Based on this concept, we propose a new acquisition system for continuous-time signals whose amplitudes are block sparse. The paper includes detailed time and frequency domain analyses of the RD and the MWC that differ, sometimes substantially, from published results. Comment: Corrected typos, updated Section 4.3, 30 pages, 8 figures
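    The RD front end (random chipping followed by integrate-and-dump sampling) can be sketched in a few lines; the grid sizes, tone model, and chipping sequence below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Discretized sketch of the Random Demodulator: multiply the input by
# a random +/-1 chipping sequence, then integrate-and-dump to produce
# sub-Nyquist measurements. N, R and the tone frequencies are invented.
rng = np.random.default_rng(1)
N, R = 128, 16                    # Nyquist grid length, low-rate samples
t = np.arange(N)

# Spectrally sparse input: a few tones on the Nyquist grid.
freqs = [5, 23, 40]
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)

# Chip, then sum each block of N // R samples into one measurement.
chips = rng.choice([-1.0, 1.0], size=N)
y = (chips * x).reshape(R, N // R).sum(axis=1)

# y is acquired at R/N of the Nyquist rate; a CS solver would then
# reconstruct x from y and the known measurement operator.
print(y.shape)
```

    The MWC replaces the single chipping sequence with a bank of periodic ones followed by low-pass filtering, which is where the "block convolution" viewpoint of the paper enters.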

    FAD binding, cobinamide binding and active site communication in the corrin reductase (CobR)

    Adenosylcobalamin, the coenzyme form of vitamin B12, is one of Nature's most complex coenzymes; its de novo biogenesis proceeds along either an anaerobic or an aerobic metabolic pathway. The aerobic synthesis involves reduction of the centrally chelated cobalt ion of the corrin ring from Co(II) to Co(I) before adenosylation can take place. A corrin reductase (CobR) enzyme has been identified as the likely agent catalysing this reduction of the metal ion. Herein, we reveal how Brucella melitensis CobR binds its coenzyme FAD (flavin adenine dinucleotide), and we show that the enzyme can bind a corrin substrate, consistent with its role in reduction of the cobalt of the corrin ring. Stopped-flow kinetics and EPR reveal a mechanistic asymmetry in the CobR dimer that provides a potential link between the two-electron reduction by NADH and the single-electron reduction of Co(II) to Co(I).

    Sequential quantization for classification: the impact of structure and nonparametric estimates

    Sequential quantization is a constrained quantization method in which elements of a real-valued vector are sequentially mapped (quantized) onto a finite set. These quantization systems are constrained in that they are not allowed to jointly process their data. Information is, however, shared to varying degrees among the quantizers. This thesis focuses on the design of these systems when the quantized data are used as the input to a classifier. The performance criteria are the f-divergences, whose connections to optimal classification are well-known. Priority is placed on understanding how a sequential quantizer's structure affects performance, estimation strategies, and computational complexity. Structure serves as a unifying concept that can help assess the benefits, if any, of inter-quantizer communications in sequential systems. Four nonparametric estimation strategies are proposed and analyzed. The conditional estimation strategy mirrors the operation of sequential quantization structures and successively maximizes conditional divergences. The local estimation strategy optimizes the marginal divergences associated with the outputs of the component quantizers. Both of these strategies decompose into simpler optimization problems whose solutions are known, and though generally suboptimal, these strategies can produce optimal estimates. Conditional and local estimates also automatically satisfy a sequential quantizer's structural constraints and are highly scalable. The joint estimation strategy is based on a uniform fine resolution partition, simulated annealing, and mechanisms to ensure adherence to a sequential quantizer's structural constraints. Compared to the conditional or local strategies, the method produces superior estimates, but is more computationally demanding. The computational burden is, however, tempered by a sequential quantizer's structure, thus making the joint strategy a practical design method for some scenarios.
Finally, we construct an empirical estimator using empirical risk minimization. It is shown that the estimation loss, that is, the divergence loss incurred by using an empirical estimate relative to the optimal sequential quantizer, decays no worse than n^{-1/2}, where n denotes the number of observations from each distribution. It is also shown that rates as fast as n^{-1} are possible under a particular assumption on the underlying distributions.
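A toy illustration of quantizer design against an f-divergence objective: choose a 2-level threshold quantizer on a discrete alphabet to maximize the KL divergence between the induced output distributions. This brute-force sketch uses invented distributions and is not one of the thesis's conditional, local, joint, or empirical strategies:

```python
import numpy as np

# Two known distributions over the alphabet {0, 1, 2, 3}; these values
# are invented for illustration.
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])

def kl(a, b):
    """KL divergence (an f-divergence) between discrete distributions."""
    return float(np.sum(a * np.log(a / b)))

# Search every threshold t: the quantizer maps symbols < t to one cell
# and symbols >= t to the other, and we keep the most discriminative one.
best_t, best_div = None, -np.inf
for t in range(1, len(p)):
    p2 = np.array([p[:t].sum(), p[t:].sum()])
    q2 = np.array([q[:t].sum(), q[t:].sum()])
    d = kl(p2, q2)
    if d > best_div:
        best_t, best_div = t, d
print(best_t, round(best_div, 4))
```

By the data processing inequality, any quantizer can only lose divergence relative to the unquantized distributions, which is why maximizing the output divergence is the natural design criterion here.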